[GPU] enable simd16 version for convolution_gpu_mmad_b_fs_yx_fsv32 #32501
base: master
Conversation
LGTM, but please add tests
src/plugins/intel_gpu/src/kernel_selector/cl_kernels/convolution_gpu_mmad_b_fs_yx_fsv32.cl
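For context on what "enabling a SIMD16 version" means here, below is a minimal sketch of the pattern Intel GPU OpenCL kernels use to select a sub-group width: the host passes the width as a build-time macro and the kernel requests it with the intel_reqd_sub_group_size attribute. The SIMD macro name and the kernel body are illustrative assumptions, not the actual convolution_gpu_mmad_b_fs_yx_fsv32 source.

```c
// OpenCL C sketch (assumed names, not the real kernel): the kernel selector
// would compile this with e.g. -DSIMD=8 or -DSIMD=16 in the build options.
#ifndef SIMD
#define SIMD 16
#endif

__attribute__((intel_reqd_sub_group_size(SIMD)))
__kernel void convolution_sketch(__global const char* input,
                                 __global const char* weights,
                                 __global char* output) {
    // Each sub-group lane (0..SIMD-1) would accumulate its own output slice.
    const uint lane = get_sub_group_local_id();
    // ... MMAD accumulation over input-channel blocks would go here ...
    (void)lane; (void)input; (void)weights; (void)output;
}
```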
os_is_yx_isa8_osv8_isv4,    ///< format for weights for MMAD convolution
os_is_zyx_isa8_osv8_isv4,   ///< format for weights for MMAD convolution
os_is_yx_isa8_osv16_isv4,   ///< format for weights for fully connected MMAD
os_is_zyx_isa8_osv16_isv4,  ///< format for weights for fully connected MMAD
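These format names encode a blocked weight layout: os/is give the outer dimension order (output channels, then input channels), and the suffixes give inner block sizes on those axes (e.g. osv8 = output channels in vectors of 8, isv4 = input channels in vectors of 4, isa8 = an outer input-channel blocking of 8). A rough sketch of how such a layout might map a logical (o, i, y, x) coordinate to a linear offset follows; the block sizes come from the format name, but the nesting order is an assumption, not the exact clDNN definition.

```c
/* Rough sketch, not the actual clDNN offset formula. */
#include <stddef.h>

#define OSV 8  /* output-channel vector size      */
#define ISV 4  /* input-channel vector size       */
#define ISA 8  /* outer input-channel block count */

size_t weight_offset(size_t o, size_t i, size_t y, size_t x,
                     size_t O, size_t I, size_t Y, size_t X) {
    (void)O; /* the output-channel extent is not needed for the offset */
    size_t o_outer = o / OSV, o_inner = o % OSV;
    size_t i_inner = i % ISV;
    size_t i_mid   = (i / ISV) % ISA;          /* slot inside the isa8 block */
    size_t i_outer = i / (ISV * ISA);
    size_t i_blocks = (I + ISV * ISA - 1) / (ISV * ISA);
    /* assumed nesting, outer to inner:
       o_outer, i_outer, y, x, i_mid, o_inner, i_inner */
    return (((((o_outer * i_blocks + i_outer) * Y + y) * X + x)
              * ISA + i_mid) * OSV + o_inner) * ISV + i_inner;
}
```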
(random spot) Could you check why the oneDNN convolution kernel is not chosen? For GPUs with XMX, our expectation is to use oneDNN convolutions instead of clDNN convolutions.
Because the weights are u8, which the oneDNN convolution primitives do not accept.
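For readers following the exchange: oneDNN's int8 convolution takes s8 weights, so a convolution whose weights are u8 cannot use the oneDNN path and falls back to clDNN kernels such as this one, even on XMX hardware. A hedged sketch of that kind of selection predicate, with purely illustrative names (not the actual OpenVINO kernel-selector API):

```c
/* Illustrative only: the real selection logic lives in the GPU plugin. */
typedef enum { DT_U8, DT_S8, DT_F16, DT_F32 } data_type_t;

int onednn_conv_supported(data_type_t weights_dt, int has_xmx) {
    if (!has_xmx)
        return 0;  /* the oneDNN preference discussed applies to XMX GPUs */
    /* oneDNN int8 convolution requires s8 weights; u8 weights fall back
       to clDNN kernels such as convolution_gpu_mmad_b_fs_yx_fsv32. */
    return weights_dt != DT_U8;
}
```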

Details:
Tickets: